Getting Started
To get started, call weave.init() at the beginning of your script. The argument to weave.init() is a project name that helps you organize your traces.
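For example (the project name here is a placeholder):

```python
import weave

# Group all subsequent traces under this project in the Weave UI.
weave.init("langchain-demo")
```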
Tracking Call Metadata
To track metadata from your LangChain calls, use the weave.attributes context manager. It lets you set custom metadata for a specific block of code, such as a chain or a single request.
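A minimal sketch, assuming an OpenAI chat model; the attribute keys and values are illustrative:

```python
import weave
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

weave.init("langchain-demo")

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = prompt | llm

# Every call made inside this block is annotated with the given
# metadata, which appears alongside the trace in the Weave UI.
with weave.attributes({"env": "production", "user_id": "user-123"}):
    chain.invoke({"number": 2})
```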

Traces
Storing traces of LLM applications in a central database is crucial during both development and production, as they provide a valuable dataset for debugging and improving your application. Weave automatically captures traces for your LangChain applications: it tracks and logs all calls made through the LangChain library, including prompt templates, chains, LLM calls, tools, and agent steps. You can view the traces in the Weave web interface.
Manually Tracing Calls
In addition to automatic tracing, you can manually trace calls using the WeaveTracer callback or the weave_tracing_enabled context manager. These methods are akin to using request callbacks in individual parts of a LangChain application.
Note: Weave traces LangChain Runnables by default; this is enabled when you call weave.init(). You can disable this behaviour by setting the environment variable WEAVE_TRACE_LANGCHAIN to "false" before calling weave.init(). This allows you to control the tracing behaviour of specific chains or even individual requests in your application.
Using WeaveTracer
You can pass the WeaveTracer callback to individual LangChain components to trace specific requests.
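A minimal sketch, assuming the import path weave.integrations.langchain and an OpenAI chat model; global tracing is switched off so that only the explicitly traced request is logged:

```python
import os

import weave
from weave.integrations.langchain import WeaveTracer
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Disable global LangChain tracing so only explicit requests are logged.
os.environ["WEAVE_TRACE_LANGCHAIN"] = "false"
weave.init("langchain-demo")

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = prompt | llm

# Only this invocation is traced, via the request-level callback.
chain.invoke({"number": 2}, config={"callbacks": [WeaveTracer()]})
```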
Using weave_tracing_enabled Context Manager
Alternatively, you can use the weave_tracing_enabled context manager to enable tracing for specific blocks of code.
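A minimal sketch under the same assumptions as above; only the call inside the context manager is traced:

```python
import os

import weave
from weave.integrations.langchain import weave_tracing_enabled
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

os.environ["WEAVE_TRACE_LANGCHAIN"] = "false"
weave.init("langchain-demo")

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = prompt | llm

# Calls inside the context manager are traced; calls outside are not.
with weave_tracing_enabled():
    chain.invoke({"number": 2})

chain.invoke({"number": 4})  # not traced
```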
Configuration
Upon calling weave.init, tracing is enabled by setting the environment variable WEAVE_TRACE_LANGCHAIN to "true". This allows Weave to automatically capture traces for your LangChain applications. If you wish to disable this behavior, set the environment variable to "false".
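For example, to opt out before initialization:

```python
import os

import weave

# Must be set before weave.init() runs, otherwise tracing is enabled.
os.environ["WEAVE_TRACE_LANGCHAIN"] = "false"
weave.init("langchain-demo")  # LangChain calls are no longer traced automatically
```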
Relation to LangChain Callbacks
Auto Logging
The automatic logging provided by weave.init() is similar to passing a constructor callback to every component in a LangChain application. This means that all interactions, including prompt templates, chains, LLM calls, tools, and agent steps, are tracked globally across your entire application.
Manual Logging
The manual logging methods (WeaveTracer and weave_tracing_enabled) are similar to using request callbacks in individual parts of a LangChain application. These methods provide finer control over which parts of your application are traced; the sketch after the list below contrasts the two:
- Constructor Callbacks: Applied to the entire chain or component, logging all interactions consistently.
- Request Callbacks: Applied to specific requests, allowing detailed tracing of particular invocations.
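In plain LangChain terms, the difference looks like this (the handler and model are illustrative):

```python
from langchain_core.callbacks import StdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

handler = StdOutCallbackHandler()

# Constructor callback: attached to the component itself, so every
# invocation of this LLM is logged consistently.
llm_logged = ChatOpenAI(model="gpt-4o-mini", callbacks=[handler])

# Request callback: attached to a single call, so only this
# particular invocation is logged.
llm = ChatOpenAI(model="gpt-4o-mini")
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = prompt | llm
chain.invoke({"number": 2}, config={"callbacks": [handler]})
```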
Models and Evaluations
Organizing and evaluating LLM applications is challenging when they span multiple components, such as prompts, model configurations, and inference parameters. Using weave.Model, you can capture and organize experimental details like system prompts or the models you use, making it easier to compare different iterations.
The following example demonstrates wrapping a LangChain chain in a WeaveModel:
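A minimal sketch, assuming an OpenAI chat model; the class name, fields, and prompt are illustrative:

```python
import weave
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

weave.init("langchain-demo")

class TranslatorModel(weave.Model):
    # Declared fields are captured as versioned model attributes in Weave.
    model_name: str
    prompt_template: str

    @weave.op()
    def predict(self, text: str) -> str:
        llm = ChatOpenAI(model=self.model_name)
        prompt = PromptTemplate.from_template(self.prompt_template)
        chain = prompt | llm
        return chain.invoke({"text": text}).content

model = TranslatorModel(
    model_name="gpt-4o-mini",
    prompt_template="Translate this to French: {text}",
)
print(model.predict("Hello, world!"))
```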

You can also use Weave Models with serve and Evaluations.
Evaluations
Evaluations help you measure the performance of your models. By using the weave.Evaluation class, you can capture how well your model performs on specific tasks or datasets, making it easier to compare different models and iterations of your application. The following example demonstrates how to evaluate the model we created:
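A minimal sketch, assuming the TranslatorModel defined above; the dataset and scorer are illustrative, and the scorer's output parameter receives the model's prediction:

```python
import asyncio

import weave
from weave import Evaluation

weave.init("langchain-demo")

# A tiny illustrative dataset of inputs and expected outputs.
examples = [
    {"text": "Hello", "expected": "Bonjour"},
    {"text": "Thank you", "expected": "Merci"},
]

# Scorer arguments are matched by name to dataset columns;
# `output` receives the model's prediction for that row.
@weave.op()
def exact_match(expected: str, output: str) -> dict:
    return {"correct": expected.lower() in output.lower()}

model = TranslatorModel(
    model_name="gpt-4o-mini",
    prompt_template="Translate this to French: {text}",
)
evaluation = Evaluation(dataset=examples, scorers=[exact_match])
asyncio.run(evaluation.evaluate(model))
```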

Known Issues
- Tracing Async Calls: A bug in the implementation of the AsyncCallbackManager in LangChain results in async calls not being traced in the correct order. We have filed a PR to fix this. Therefore, the order of calls in the trace may not be accurate when using the ainvoke, astream, and abatch methods in LangChain Runnables.